Multi-Task Semi-Supervised Underwater Mine Detection


Lawrence Carin, Qiuhua Liu and Xuejun Liao

Duke University

Jason Stack

Office of Naval Research

Intra-Scene Context

What the Analyst Processes vs. Individual Signatures Processed by Supervised Classifiers

Message:

The analyst places the classification of any given item within the context of all items in the scene; a supervised classifier classifies each item in isolation.

Decision surface based on labeled data only (supervised)

Decision surface based on labeled and unlabeled data (semi-supervised)
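As a concrete illustration (this is not the talk's PNBC classifier; the two-moons data, the LabelSpreading model, and all parameter choices below are assumptions of this sketch), a supervised model fit on a handful of labeled points can be contrasted with a semi-supervised model whose decision surface is also shaped by the unlabeled points:

# Minimal sketch: supervised vs. semi-supervised decision surfaces on toy data.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import LabelSpreading

X, y_true = make_moons(n_samples=200, noise=0.1, random_state=0)

# Pretend only 5 points per class carry labels; the rest are unlabeled (-1).
rng = np.random.default_rng(0)
y_partial = np.full(len(y_true), -1)
for c in (0, 1):
    idx = rng.choice(np.where(y_true == c)[0], size=5, replace=False)
    y_partial[idx] = c
labeled = y_partial != -1

# Supervised: decision surface determined by the labeled points alone.
supervised = LogisticRegression().fit(X[labeled], y_partial[labeled])

# Semi-supervised: labels diffuse over a graph built on all points,
# so the unlabeled data also shape the decision surface.
semi = LabelSpreading(kernel="rbf", gamma=20).fit(X, y_partial)

print("supervised accuracy:     ", supervised.score(X, y_true))
print("semi-supervised accuracy:", (semi.transduction_ == y_true).mean())

With so few labels, the semi-supervised surface typically follows the structure of the unlabeled data much more closely than the purely supervised one.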

Inter-Scene Context


Message

Humans are very good at exploiting context, both within a given scene and across multiple scenes

Intra-scene context: semi-supervised learning

Inter-scene context: multi-task and transfer learning

Exploiting such context is a major focus of current machine learning research


Data Manifold Representation Based on Markov Random Walks

Given X = {x_1, ..., x_N}, first construct a graph G = (X, W) with affinity matrix W, where the (i, j)-th element of W is defined by a Gaussian kernel:

w_{ij} = \exp\left( -\|x_i - x_j\|^2 / (2\sigma_i^2) \right)

We then consider a Markov transition matrix A, which defines a Markov random walk; its (i, j)-th element

a_{ij} = \frac{w_{ij}}{\sum_{k=1}^{N} w_{ik}}

gives the probability of walking from x_i to x_j in a single step.

The one-step Markov random walk provides a local similarity measure between data points.
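A minimal sketch of this construction (the per-point bandwidth sigma_i is an assumption here; a common choice ties it to the distance to the k-th nearest neighbor):

# Build the Gaussian affinity matrix W over X = {x_1, ..., x_N} and
# row-normalize it into the one-step Markov transition matrix A.
import numpy as np

def markov_random_walk(X, sigma):
    """X: (N, d) data matrix; sigma: (N,) per-point kernel widths."""
    # Pairwise squared Euclidean distances ||x_i - x_j||^2.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    # Affinity w_ij = exp(-||x_i - x_j||^2 / (2 sigma_i^2)).
    W = np.exp(-sq_dists / (2.0 * sigma[:, None] ** 2))
    # Transition a_ij = w_ij / sum_k w_ik: the probability of a one-step
    # walk from x_i to x_j (each row of A sums to 1).
    A = W / W.sum(axis=1, keepdims=True)
    return W, A

# Example: 100 random 2-D points with a shared bandwidth of 0.5.
X = np.random.default_rng(0).normal(size=(100, 2))
W, A = markov_random_walk(X, sigma=np.full(100, 0.5))
assert np.allclose(A.sum(axis=1), 1.0)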


Semi-Supervised Multitask Learning (1/2)

Semi-supervised MTL: Given M partially labeled data manifolds, each defining a classification task, we propose a unified sharing structure to learn the M classifiers simultaneously.

The Sharing Prior: We consider M PNBC classifiers, parameterized by \theta_m, m = 1, 2, ..., M.

The M classifiers are not independent but are coupled by a joint prior distribution:

p(\theta_1, ..., \theta_M) = \prod_{m=1}^{M} p(\theta_m \mid \theta_1, ..., \theta_{m-1})
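A small sketch (function names hypothetical) of how this factorization is used: the log of the joint prior is the sum of per-task conditional log-priors, each conditioned only on the tasks already learned. The concrete form of the conditional appears on the next slide, and a matching sketch follows it.

def log_joint_prior(thetas, log_conditional_prior):
    """thetas: list of M per-task parameter vectors.
    log_conditional_prior(theta_m, previous) returns log p(theta_m | theta_1..theta_{m-1})."""
    # Chain rule: log p(theta_1,...,theta_M) = sum_m log p(theta_m | theta_1..theta_{m-1}).
    return sum(log_conditional_prior(theta_m, thetas[:m])
               for m, theta_m in enumerate(thetas))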


Semi-Supervised Multitask Learning (2/2)

With

p(\theta_m \mid \theta_1, ..., \theta_{m-1}) = \frac{1}{m-1+\gamma} \left[ \gamma\, p(\theta_m) + \sum_{l=1}^{m-1} \mathcal{N}(\theta_m; \theta_l, \sigma^2 I) \right]

where \gamma p(\theta_m) is the baseline prior, the sum of normal distributions is the prior transferred from previous tasks, and \gamma is a balance parameter.

The normal distributions encode the meta-knowledge of how the present task should be learned, based on the experience with previous tasks.

When there are no previous tasks (m = 1), only the baseline prior is used, recovering the PNBC.

Sharing encourages the tasks to have similar \theta's, not exactly the same (an advantage over the Dirac delta function used in previous MTL work).
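A sketch of the conditional sharing prior as reconstructed above (the standard-normal baseline prior and the gamma and sigma values are assumptions of this sketch, standing in for the PNBC prior of the talk); it can be plugged into the log_joint_prior sketch given earlier:

import numpy as np
from scipy.stats import multivariate_normal

def log_conditional_prior(theta_m, previous, gamma=1.0, sigma=1.0):
    """log p(theta_m | theta_1..theta_{m-1}) for the sharing prior."""
    d = len(theta_m)
    # Baseline prior p(theta_m): a standard normal stands in for the PNBC prior here.
    baseline = multivariate_normal.pdf(theta_m, mean=np.zeros(d), cov=np.eye(d))
    if len(previous) == 0:
        # m = 1: no previous tasks, so only the baseline prior is used (plain PNBC).
        return float(np.log(baseline))
    # Prior transferred from previous tasks: Gaussians centered at the earlier
    # task parameters, so tasks are encouraged to be similar but not identical.
    transferred = sum(multivariate_normal.pdf(theta_m, mean=theta_l,
                                              cov=sigma ** 2 * np.eye(d))
                      for theta_l in previous)
    # gamma balances the baseline prior against the transferred knowledge.
    return float(np.log((gamma * baseline + transferred) / (len(previous) + gamma)))

# Example: evaluate the prior for a third task given two related earlier tasks.
previous = [np.array([0.2, -0.1]), np.array([0.3, 0.0])]
print(log_conditional_prior(np.array([0.25, -0.05]), previous))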


Thanks
