Mesh segmentation
Florent Lafarge
Inria Sophia Antipolis - Mediterranee
What is mesh segmentation?
M = {V,E,F} is a mesh
S is either V, E or F (usually F)
A segmentation is a set of sub-meshes induced by a partition of S into k disjoint subsets
segmentation is also called partitioning or clustering
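As a concrete illustration of this definition, here is a minimal sketch (the face-set representation and function name are assumptions, not from the slides) that checks whether a candidate segmentation really is a partition of S into disjoint subsets:

```python
def is_valid_segmentation(faces, clusters):
    """Check that `clusters` is a partition of the face set S:
    the subsets are pairwise disjoint and together cover every face."""
    seen = set()
    for c in clusters:
        if seen & c:          # overlap -> subsets are not disjoint
            return False
        seen |= c
    return seen == set(faces)
```

Each cluster induces one sub-mesh; the check rejects both overlapping clusters and clusters that fail to cover the mesh.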
Formulating a mesh segmentation problem
Usually specified by two key elements:
a function measuring the quality of a partition, possibly under a set of constraints
a mechanism for finding an optimal partition.
Contents
Attributes and constraints
Segmentation algorithms
How to measure the quality of a segmentation?
What are we looking for?
Planar clusters?
Smooth clusters?
Round clusters?
Small/large clusters?
Small number of clusters?
Smooth boundaries?
Structural clusters?
…
Attributes and constraints
Attributes:
criteria used to measure the quality of a partition with respect to the input mesh
attributes are usually embedded into some metrics
Constraints:
cardinality, geometry or topology of the clusters must be preserved (hard) or favored (soft)
Attributes and constraints
Example problems:
without constraints: min_P f(P, attributes)
with hard constraints: min_P f(P, attributes) under g(P) = 0
with soft constraints: min_P f(P, attributes) − α·g(P)
Attributes
Distance and geodesic distance
Planarity, normal direction
Smoothness, curvature
Distance to complex geometric primitives
Symmetry
Medial axis, shape diameter
Texture
…
Constraints
Cardinality:
not too small/large, or a given number of clusters
overall balanced partition
Geometry:
size (area, diameter, radius)
convexity, roundness
boundary smoothness
Topology:
connectivity (single component)
disk topology
Contents
Attributes and constraints
Segmentation algorithms
Segmentation algorithms
A very large variety of "mechanisms" exists in the literature, many in common with image processing / computer vision.
Focus on some of them:
Thresholding
Region growing
Hierarchical partitioning
Markov Random Fields
Thresholding
simple: one parameter t for a binary segmentation
[Figure: histogram h(i) of attribute values i; the threshold t separates the two modes]
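The single-parameter mechanism above can be sketched in a few lines (function and variable names are illustrative):

```python
def threshold_segmentation(values, t):
    """Binary segmentation: each site is labeled by comparing its
    attribute value to a single threshold t."""
    return [1 if v > t else 0 for v in values]
```

One threshold, two clusters; the whole difficulty lies in choosing t, typically from the attribute histogram.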
Hysteresis thresholding
idea: keep the sites connected (favor large clusters)
two parameters t_l < t_h for a binary segmentation
scheme:
threshold the sites with t_l
keep as clusters only the connected sets of sites all > t_l that contain at least one site > t_h
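The two-threshold scheme can be sketched as follows (a minimal implementation assuming an adjacency list per site; names are illustrative):

```python
from collections import deque

def hysteresis_segmentation(values, adjacency, t_low, t_high):
    """Keep only the connected components of sites with value > t_low
    that contain at least one site with value > t_high."""
    n = len(values)
    labels = [0] * n
    visited = [False] * n
    for seed in range(n):
        if visited[seed] or values[seed] <= t_low:
            continue
        # collect the connected component of sites above t_low
        comp, queue = [], deque([seed])
        visited[seed] = True
        while queue:
            i = queue.popleft()
            comp.append(i)
            for j in adjacency[i]:
                if not visited[j] and values[j] > t_low:
                    visited[j] = True
                    queue.append(j)
        # keep the component only if one site exceeds t_high
        if any(values[i] > t_high for i in comp):
            for i in comp:
                labels[i] = 1
    return labels
```

Compared with plain thresholding, weak sites survive only when connected to a strong one, which favors large clusters.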
Region growing
see primitive-based surface reconstruction slides
Mechanism:
while the mesh is not entirely segmented:
choose an unlabeled site (a facet) and give it label k
assign label k to its similar adjacent sites
iterate on the newly assigned sites
when the propagation of cluster k is finished, update k = k+1
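The mechanism above can be sketched as follows (a minimal version where "similar" means the attribute is within a tolerance of the seed's attribute; this similarity rule and the names are assumptions):

```python
def region_growing(n_sites, adjacency, attr, tol):
    """Greedy region growing: pick an unlabeled facet, start cluster k,
    and propagate k to adjacent facets whose attribute lies within
    `tol` of the seed's attribute."""
    labels = [-1] * n_sites          # -1 = unlabeled
    k = 0
    for seed in range(n_sites):
        if labels[seed] != -1:
            continue
        labels[seed] = k
        front = [seed]
        while front:                 # propagate cluster k
            i = front.pop()
            for j in adjacency[i]:
                if labels[j] == -1 and abs(attr[j] - attr[seed]) <= tol:
                    labels[j] = k
                    front.append(j)
        k += 1                       # propagation finished, next cluster
    return labels
```

Other similarity tests (e.g. normal deviation, distance to a fitted primitive) plug into the same loop.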
Hierarchical partitioning
start from one region representing the entire mesh
divide a region into several regions when the criteria are not valid, and iterate on the new regions
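The top-down scheme can be sketched as follows (a toy version where the validity criterion is the spread of a scalar attribute and the split is made around the median; both choices are assumptions for illustration):

```python
def hierarchical_partition(sites, attr, max_spread):
    """Top-down partitioning: start from one region holding all sites,
    split it in two around the attribute median whenever the criterion
    (attribute spread <= max_spread) is not valid, then recurse."""
    spread = max(attr[i] for i in sites) - min(attr[i] for i in sites)
    if spread <= max_spread or len(sites) < 2:
        return [sites]               # criterion valid: keep the region
    sites = sorted(sites, key=lambda i: attr[i])
    mid = len(sites) // 2
    return (hierarchical_partition(sites[:mid], attr, max_spread)
            + hierarchical_partition(sites[mid:], attr, max_spread))
```

Real variants split along geometric criteria (e.g. fitting error to a plane) instead of a scalar spread.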
Markov Random Fields (MRF)
a set of random variables having the Markov property, described by an undirected graph
Let V be the set of nodes in the graph; Card(V) = number of random variables in the MRF
Markov property
in 1D (Markov chain), n usually corresponds to time:
P[ X_{n+1} | X_0, X_1, …, X_n ] = P[ X_{n+1} | X_n ]
Markov property
in 2D or on a manifold in 3D (Markov field):
P[ X_k | X − {X_k} ] = P[ X_k | X_{n(k)} ]   with n(k) the neighbors of k
[Figure: node X_k with its neighbors X_{k1}, …, X_{k4}; distant nodes X_m, X_n, X_j do not influence X_k directly]
Notion of neighborhood
N = { n(i) : i ∈ V } is a neighborhood system if
(a) i ∉ n(i)
(b) i ∈ n(j) ⇔ j ∈ n(i)
an MRF is always associated with a neighborhood system defining the dependency between graph nodes
MRF as an energy
Gibbs energy (Hammersley-Clifford theorem)
Let X be an MRF such that P(X=x) > 0 for all x.
Then P(X) is a Gibbs distribution of the form
P(X=x) = (1/Z) · exp(−U(x))
where Z is a normalizing constant and U is called a Gibbs energy.
Markov property
Why is the Markov property important?
take a graph with 1M nodes
if each node were adjacent to every other node: 1M × 999,999 / 2 edges ≈ 500 G edges
each random variable cannot depend on all the other ones: complexity needs to be reduced by spatial considerations
Markov Random Fields for images
two common graphs:
nodes = pixel centers, edges = adjacent pixels (4-connectivity)
nodes = pixel corners, edges = pixel borders
Markov Random Fields for meshes
graph nodes = vertices & graph edges = mesh edges
graph nodes = facets & graph edges = shared mesh edges
graph nodes = mesh edges & graph edges = facets
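For the common facet-based choice, the MRF graph can be built from the mesh connectivity as follows (a minimal sketch assuming facets are given as tuples of vertex ids; the function name is illustrative):

```python
def facet_adjacency_graph(facets):
    """Build the graph with nodes = facets and graph edges between
    facets sharing a mesh edge."""
    edge_to_facets = {}
    for f_id, verts in enumerate(facets):
        # walk the facet boundary: consecutive vertex pairs are mesh edges
        for a, b in zip(verts, verts[1:] + verts[:1]):
            edge = (min(a, b), max(a, b))      # undirected mesh edge
            edge_to_facets.setdefault(edge, []).append(f_id)
    graph_edges = set()
    for f_ids in edge_to_facets.values():
        for i in range(len(f_ids)):
            for j in range(i + 1, len(f_ids)):
                graph_edges.add((f_ids[i], f_ids[j]))
    return sorted(graph_edges)
```

The same dictionary of mesh edges also yields the vertex-based and edge-based graphs with small variations.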
Bayesian formulation
Let y be the data (attributes) and x the labels.
We want to model the probability of having x knowing y.
Bayes law: P(X=x | Y=y) ∝ P(Y=y | X=x) · P(X=x)
(posterior probability ∝ likelihood × prior probability)
Standard assumptions:
conditional independence of the observations: P(Y=y | X=x) = Π_{i∈V} P(y_i | x_i)
X is an MRF
From probability to energy: U(x) = Σ_{i∈V} D_i(x_i) + Σ_{i,j} V_{i,j}(x_i, x_j)
data term D_i = −log(likelihood) when Bayesian; local dependency hypothesis (l = x)
regularization term V_{i,j} = −log(pairwise interaction prior) when Bayesian; soft constraints
Optimal configuration: we search for the label configuration x that maximizes P(X=x | Y=y)
x* = arg max_x P(X=x | Y=y) = arg min_x U(x)
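Evaluating the energy of a given label configuration is straightforward; here is a minimal sketch of the generic data-term-plus-pairwise-term sum (names and the sample potentials are assumptions for illustration):

```python
def gibbs_energy(labels, data_term, edges, pair_term):
    """U(x) = sum_i D_i(x_i) + sum_{(i,j)} V(x_i, x_j).
    Minimizing U is equivalent to maximizing the posterior P(X=x|Y=y)."""
    u = sum(data_term(i, labels[i]) for i in range(len(labels)))
    u += sum(pair_term(labels[i], labels[j]) for i, j in edges)
    return u
```

A small usage example with per-site attributes y_i in [0,1] and a Potts-style pairwise cost: `D = lambda i, l: 1 - y[i] if l == 1 else y[i]` and `V = lambda a, b: beta if a != b else 0.0`.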
Exercise: binary segmentation
Graph structure:
graph nodes = facets
graph edges = common edges
attribute on facet i: y_i ∈ [0,1]
labels: l_i ∈ {white, black}
Energy: U(l) = Σ_i D_i(l_i) + β Σ_{i,j} V(l_i, l_j)
with D_i(l_i) = 1 − y_i if l_i = white, and y_i otherwise
V(l_i, l_j) = 1 if l_i ≠ l_j, and 0 otherwise
[Figure: two candidate labelings of a small facet graph; attribute values y_i are 0.1, 0.1, 0.2, 0.8, 0.6, 0.9, 0.7, 0.7, 0.6 on one example and 0, 0, 0.2, 0, 0.1, 0.1, 0.9, 0, 0.1 on a second example]
Q1: what is the optimal configuration l if β = 0? What is its energy?
Q2: what is the optimal configuration if β → ∞?
Q3: what are the other possible optimal configurations as a function of β?
Exercise: binary segmentation
Finding the optimal configuration of labels:
graph-cut based approaches: fast, but restrictions on the energy formulation
Monte Carlo sampling: slow, but no restriction
Optimization by graph-cut
Find a cut in the graph separating the nodes into different groups.
[Figure: graph with a source s, a sink t and n-links; a cut separates the source side from the sink side]
Capacity of a cut (S,T): c(S,T) = Σ_{x∈S, y∈T} c(x,y)
Binary labeling: min-cut formulation
[Figure: each node p is linked to the source s and the sink t; the minimum cut assigns the label L(p)]
Non-binary labeling: multi-label cut
α-β swap
each variable taking label α or β can change its label to α or β
all α-β moves are iteratively performed until convergence
Requirements:
V(l_i, l_i) = 0
V(l_i, l_j) = V(l_j, l_i) ≥ 0
α-expansion
each variable either keeps its old label or changes to α
all α-moves are iteratively performed until convergence
Requirements:
V(l_i, l_i) = 0
V(l_i, l_j) = V(l_j, l_i) ≥ 0
V(l_i, l_j) + V(l_j, l_k) ≥ V(l_i, l_k) (triangle inequality)
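The α-expansion loop can be sketched as follows; note that this toy version solves the binary "expand α" subproblem by brute-force enumeration (an assumption for readability, usable only on tiny graphs), whereas real implementations solve it with a min-cut/max-flow computation:

```python
import itertools

def total_energy(labels, D, edges, beta):
    """U(l) = data terms + Potts pairwise term (cost beta per cut edge)."""
    e = sum(D[i][l] for i, l in enumerate(labels))
    e += sum(beta for i, j in edges if labels[i] != labels[j])
    return e

def alpha_expansion(D, edges, beta, n_labels):
    """Iterate α-expansion moves until no move decreases the energy."""
    n = len(D)
    # initialize each node with its best data-term label
    labels = [min(range(n_labels), key=lambda l: D[i][l]) for i in range(n)]
    improved = True
    while improved:
        improved = False
        for alpha in range(n_labels):
            best, best_e = labels, total_energy(labels, D, edges, beta)
            # each variable either keeps its label or switches to alpha
            for mask in itertools.product((0, 1), repeat=n):
                cand = [alpha if m else l for l, m in zip(labels, mask)]
                e = total_energy(cand, D, edges, beta)
                if e < best_e:
                    best, best_e, improved = cand, e, True
            labels = best
    return labels
```

The Potts pairwise term used here satisfies the three requirements above, so each binary subproblem is graph-cut representable in a real implementation.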
α-expansion: example
[Figure: successive α-expansion moves X0, X1, X2, X3, … until convergence]
Optimization by Monte Carlo sampling
simulating a Markov chain (X_n) on the configuration space Ω
for all x, x_1, x_2, …, x_n: P[ X_{n+1}=x | X_n=x_n, …, X_1=x_1 ] = P[ X_{n+1}=x | X_n=x_n ]
Optimization by Monte Carlo sampling
the transitions correspond to perturbations of the current configuration; they are proposed according to a proposal density Q (or "kernel")
Optimization by Monte Carlo sampling
the chain is built to be ergodic in order to ensure convergence toward a stationary measure (specified via an energy):
reversibility
the probability of being in configuration x AND moving to configuration y equals the probability of being in configuration y AND moving to configuration x (detailed balance condition)
aperiodicity
avoid falling into cycles in the chain: for all x, Q(x → x) > 0
irreducibility
any configuration in Ω can be reached under Q from any initial configuration in Ω, in finite time
Optimization by Monte Carlo sampling
each iteration is composed of 2 steps:
proposal of a new state
decision to move to this new state or to stay in the current one
Markov Chain Monte Carlo: algorithm
At iteration t, if the current state is X_t = x:
simulate, according to the proposal density Q, a new configuration y close to the configuration x
simulate u ∼ U([0,1])
accept X_{t+1} = y if u < min(1, h(y)·Q(y→x) / (h(x)·Q(x→y))), and keep X_{t+1} = x otherwise
The density h is specified by an energy U such that h(x) ∝ exp(−U(x)/T_t)
Introduction of a stochastic relaxation:
the relaxation parameter T_t (called temperature) tends to 0 as t tends to infinity
this allows control of the configuration-space exploration by imposing that the process becomes more and more selective
MCMC as an optimization technique
An image segmentation problem:
configuration space: Ω = {1,2,3,4,5,6}^(number of pixels)
energy: Gaussian likelihood + Potts model
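A segmentation problem of this type can be sketched as follows on a toy 1D "image" (a chain of sites): a Gaussian likelihood data term, a Potts prior, a symmetric single-site proposal, and a Metropolis acceptance with decreasing temperature. Function names, the cooling schedule and the parameter values are assumptions for illustration:

```python
import math
import random

def anneal_potts(y, edges, n_labels, beta, sigma, n_iter=20000, t0=1.0):
    """Simulated annealing sketch for an MRF with Gaussian likelihood
    (data term (y_i - l)^2 / (2 sigma^2)) and a Potts prior.
    Proposal Q: pick one site and one label uniformly at random."""
    random.seed(0)                       # deterministic for the demo
    nbrs = {}
    for i, j in edges:
        nbrs.setdefault(i, []).append(j)
        nbrs.setdefault(j, []).append(i)
    labels = [random.randrange(n_labels) for _ in y]

    def local_energy(i, l):
        # energy terms that depend on the label of site i only
        e = (y[i] - l) ** 2 / (2 * sigma ** 2)
        return e + beta * sum(1 for j in nbrs.get(i, ()) if labels[j] != l)

    for t in range(n_iter):
        temp = t0 / (1 + t)              # temperature decreasing toward 0
        i = random.randrange(len(y))
        new = random.randrange(n_labels)
        delta = local_energy(i, new) - local_energy(i, labels[i])
        # Metropolis acceptance (the proposal is symmetric, so Q cancels)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            labels[i] = new
    return labels
```

Because only one site changes per move, the acceptance test needs only the local energy difference, which keeps each iteration cheap.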
Example: mesh segmentation with principal curvature attributes & soft geometric constraints
Multi-label energy model of the form
U(l) = Σ_{i∈V} D_i(l_i) + Σ_{(i,j)∈E} V_{i,j}(l_i, l_j)
with V, the set of vertices of the input mesh
E, the set of edges in the mesh
l_i, the label of vertex i among: planar (1), developable convex (2), developable concave (3) and non-developable (4)
Data term: derived from the principal curvatures measured at each vertex
Soft constraints:
label smoothness
edge preservation
Optimization
Some results